Moral Machines: Teaching Robots Right From Wrong by Wendell Wallach & Colin Allen

Author: Wendell Wallach & Colin Allen [Wallach, Wendell & Allen, Colin]
Language: eng
Format: epub, pdf
Tags: Computers, Intelligence (AI) & Semantics, Philosophy, Ethics & Moral Philosophy, Mind & Body, Computer Science
ISBN: 9780195374049
Google: 4ApBmQEACAAJ
Amazon: 0199737975
Publisher: Oxford University Press
Published: 2008-11-19T00:00:00+00:00


biological creatures depend on specialized receptors called nociceptors—neurons that are dedicated to detecting noxious stimuli. Pain is not simply the result of high-intensity stimulation of pressure and temperature receptors.

Huggable uses thresholds to label stimuli as “unpleasant.” Although this fails to capture the subtle operation of the biological pain system, it may provide a reasonable first approximation. Nevertheless, a full system would need to be alert to the fact that in humans, pain is often context-specific and dependent on the integration of a range of factors. For example, in the late autumn one’s tolerance for cold is typically much lower than in the winter. Ears burn painfully on the first cold mornings. But as body metabolism readjusts, humans are quite capable of accommodating much colder winter temperatures with ease. A (ro)bot following Asimov’s First Law, to do or allow no harm to a human being, would need to be aware of the facts of human pain sensitivity. A quick way to this knowledge is to have the same sensitivities.
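As a rough illustration of what threshold-based labeling with a context adjustment might look like, consider the following minimal sketch in Python. The class, thresholds, and acclimatization rule are hypothetical stand-ins invented for this example, not code from the Huggable project.

    # Hypothetical sketch: fixed thresholds label raw readings as
    # "unpleasant," and a crude acclimatization step shifts the cold
    # threshold toward recent exposure, mimicking seasonal readjustment.
    class StimulusLabeler:
        def __init__(self, pressure_limit=8.0, cold_limit_c=5.0):
            self.pressure_limit = pressure_limit  # arbitrary pressure units
            self.cold_limit_c = cold_limit_c      # degrees Celsius

        def label(self, pressure, temperature_c):
            """Label a reading 'unpleasant' if any channel crosses its threshold."""
            if pressure > self.pressure_limit or temperature_c < self.cold_limit_c:
                return "unpleasant"
            return "neutral"

        def acclimatize(self, recent_temps_c):
            # Stand-in for metabolic readjustment: the cold threshold tracks
            # a running average of recent exposure, so the same reading can
            # be "unpleasant" in autumn but "neutral" in midwinter.
            avg = sum(recent_temps_c) / len(recent_temps_c)
            self.cold_limit_c = avg - 10.0

    labeler = StimulusLabeler()
    print(labeler.label(pressure=2.0, temperature_c=3.0))  # "unpleasant": first cold morning
    labeler.acclimatize([-2.0, -5.0, 0.0])                 # weeks of colder exposure
    print(labeler.label(pressure=2.0, temperature_c=3.0))  # "neutral": acclimatized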

While we anticipate the development of neural nets for integrating sensory input from a range of sources, for the near term this data will be translated into cognitive representations of emotions rather than actual somatic states that could be counted as (ro)bots having emotions or feelings of their own.
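To make the contrast concrete, a cognitive representation of an emotion might be nothing more than a structured record the system can reason over, produced downstream of whatever network fuses the sensory channels. The sketch below is illustrative only; the class and field names are assumptions, not a published architecture.

    from dataclasses import dataclass

    @dataclass
    class EmotionRepresentation:
        # A symbolic, inspectable record of an appraised emotion: data the
        # system reasons about, not a somatic state the system undergoes.
        label: str        # e.g., "distress" or "contentment"
        intensity: float  # 0.0 (absent) through 1.0 (maximal)
        source: str       # which fused sensory channels drove the appraisal

    def appraise(fused_signal: float) -> EmotionRepresentation:
        """Map a fused noxious-stimulus signal in [0, 1] onto a labeled record."""
        label = "distress" if fused_signal > 0.5 else "contentment"
        intensity = abs(fused_signal - 0.5) * 2  # distance from the midpoint
        return EmotionRepresentation(label, intensity, "touch+temperature")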

If the actual capacity to feel pleasure and pain is essential for understanding how other people will be affected by different courses of action, (ro)bots will fall short in their discernment and moral acumen, not to mention their ability to be empathetic or compassionate. A truly compassionate (ro)bot is a tall order, and perhaps out of reach as far as any technology known today.

What isn’t out of reach, however, is the development of artificial systems capable of reading the emotions (minds?) of humans and interacting as if they understand the intentions and expectations of humans.

Affective Computing 1: Detecting Emotions

I ask this as an open question . . . and I don’t know the answer: How far can a computer go in terms of doing a good job handling people’s emotions and knowing when it is appropriate to show emotions without actually having the feelings?

—Rosalind Picard

Rosalind Picard’s Affective Computing Research Group at MIT wants to make it less frustrating to work with computers. Frustration is an emotion everyone can relate to when dealing with technology that seems stupid and inflexible. A first step on the way to reducing frustration is to have computers and robots that can recognize frustration—in other words, that can recognize an emotion.

But computers and robots don’t have telepathy or any special access to people’s inner feelings. Engineers are exploring techniques that emulate the ability to read the same nonverbal cues (facial expressions, tone of voice, body posture, hand gestures, and eye movements) that help people understand each other.

This is the relatively new field of affective computing, and it encompasses a variety of different research goals. Modeling and studying human emotions and building systems with the intelligence to recognize, categorize, and respond to those emotions are separate but also overlapping goals for research in affective computing.





